By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Using massive datasets to train large-scale models has emerged as a dominant approach for broad generalization in natural language and vision applications. In reinforcement learning, however, a key challenge is that available data of sequential decision making is often not annotated with actions - for example, videos of game-play are much more available than sequences of frames paired with their logged game controls. We propose to circumvent this challenge by combining large but sparsely-annotated datasets from a \emph{target} environment of interest with fully-annotated datasets from various other \emph{source} environments. Our method, Action Limited PreTraining (ALPT), leverages the generalization capabilities of inverse dynamics modelling (IDM) to label missing action data in the target environment. We show that utilizing even one additional environment dataset of labelled data during IDM pretraining gives rise to substantial improvements in generating action labels for unannotated sequences. We evaluate our method on benchmark game-playing environments and show that we can significantly improve game performance and generalization capability compared to other approaches, using annotated datasets equivalent to only $12$ minutes of gameplay. Highlighting the power of IDM, we show that these benefits remain even when target and source environments share no common actions.
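The core mechanism here, training an inverse dynamics model (IDM) on labeled transitions and using it to annotate action-free data, can be caricatured in a few lines. This is a minimal sketch under toy assumptions (2-D grid states, four discrete moves, deterministic dynamics), not the paper's Transformer-based IDM:

```python
import numpy as np

# Toy IDM sketch for ALPT-style action labeling. Hypothetical setup:
# states are 2-D points and the dynamics are s' = s + MOVES[a].
MOVES = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def make_dataset(rng, n):
    s = rng.integers(-5, 6, size=(n, 2)).astype(float)
    a = rng.integers(0, 4, size=n)
    return s, a, s + MOVES[a]

def train_idm(s, a, s_next):
    # "Train" the IDM: memorize the mean state delta per action.
    deltas = s_next - s
    return np.array([deltas[a == k].mean(axis=0) for k in range(4)])

def label_actions(idm, s, s_next):
    # Label unannotated transitions with the nearest learned action delta.
    deltas = s_next - s
    dists = np.linalg.norm(deltas[:, None, :] - idm[None, :, :], axis=2)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
s, a, s_next = make_dataset(rng, 200)      # fully-annotated "source" data
idm = train_idm(s, a, s_next)
su, au, su_next = make_dataset(rng, 50)    # pretend these actions are missing
pred = label_actions(idm, su, su_next)
print((pred == au).mean())                 # fraction of hidden labels recovered
```

With deterministic toy dynamics the IDM recovers every hidden label; the paper's setting replaces the memorized deltas with a learned network and the grid with raw game frames.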
Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. Prior methods learn a 1-step dynamics model, which predicts the next state given the current state and action. These models do not immediately tell the agent which actions to take, but must be integrated into a larger RL framework. Can we model the environment dynamics in a different way, such that the learned model does directly indicate the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step model of the environment dynamics. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex continuous control benchmarks.
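The contrastive objective underlying CVL-style implicit models can be illustrated with a generic InfoNCE loss. This is a hedged sketch, not the paper's exact objective: the learned embedding networks are replaced with fixed random features, and matched (state-action, future-state) pairs serve as positives:

```python
import numpy as np

# Generic InfoNCE-style contrastive loss: score each state-action
# embedding against every future-state embedding in the batch and
# push up the score of the matched pair relative to the rest.
rng = np.random.default_rng(0)
B, D = 4, 8
phi_sa = rng.normal(size=(B, D))                  # state-action embeddings
psi_sf = phi_sa + 0.1 * rng.normal(size=(B, D))   # matched future-state embeddings

logits = phi_sa @ psi_sf.T                        # [i, j] = score of pair (i, j)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))               # cross-entropy on matched pairs
print(loss)
```

Because the model is trained on (state-action, future-state) pairs rather than rewards, the learned scores can later be combined with rewards to rank actions, which is the sense in which the implicit model "directly" supports value estimation.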
In offline reinforcement learning (RL), a learner leverages prior logged data to learn a good policy without interacting with the environment. A major challenge in applying such methods in practice is the lack of both theoretically principled and practical tools for model selection and evaluation. To address this, we study the problem of model selection in offline RL with value function approximation. The learner is given a nested sequence of model classes to minimize squared Bellman error and must select among these to achieve a balance between approximation and estimation error of the classes. We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal oracle inequalities up to logarithmic factors. The algorithm, ModBE, takes as input a collection of candidate model classes and a generic base offline RL algorithm. By successively eliminating model classes using a novel one-sided generalization test, ModBE returns a policy with regret scaling with the complexity of the minimally complete model class. In addition to its theoretical guarantees, it is conceptually simple and computationally efficient, amounting to solving a series of square loss regression problems and then comparing relative square loss between classes. We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
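The successive-elimination idea behind ModBE can be caricatured in a supervised toy: nested model classes (polynomials of increasing degree), a held-out square loss per class, and a fixed slack standing in for the paper's one-sided statistical test. This is a hypothetical illustration of the selection principle, not the offline-RL algorithm itself:

```python
import numpy as np

# Select the minimally complex class whose held-out square loss is
# within a slack of the best class (crude stand-in for ModBE's test).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 300)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.1, 300)  # true degree 2
x_tr, y_tr, x_va, y_va = x[:200], y[:200], x[200:], y[200:]

def val_loss(degree):
    coef = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coef, x_va) - y_va) ** 2)

losses = [val_loss(d) for d in range(6)]
tol = 0.02  # slack standing in for the one-sided generalization test
selected = min(d for d in range(6) if losses[d] <= min(losses) + tol)
print(selected)
```

Larger classes fit at least as well on held-out data, so comparing relative square loss and keeping the smallest non-eliminated class balances approximation against estimation error, which is the trade-off the paper's oracle inequality formalizes.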
Evolution Strategy (ES) algorithms have shown promising results in training complex robotic control policies due to their massive parallelism capability, simple implementation, effective parameter-space exploration, and fast training times. However, a key limitation of ES is its scalability to large-capacity models, including modern neural network architectures. In this work, we develop Predictive Information Augmented Random Search (PI-ARS) to mitigate this limitation by leveraging representation learning to reduce the parameter search space for ES. Namely, PI-ARS combines a gradient-based representation learning technique, Predictive Information (PI), with a gradient-free ES algorithm, Augmented Random Search (ARS), to train policies that can process complex robot sensory inputs and handle highly nonlinear robot dynamics. We evaluate PI-ARS on a set of challenging visual-locomotion tasks where a quadruped robot needs to walk on uneven stepping stones, quincuncial piles, and moving platforms, as well as to complete an indoor navigation task. Across all tasks, PI-ARS demonstrates significantly better learning efficiency and performance compared to the ARS baseline. We further validate our algorithm by demonstrating that the learned policies can successfully transfer to a real quadruped robot, for example, achieving a 100% success rate on the real-world stepping stone environment, dramatically improving on the prior result of 40% success.
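The gradient-free half of PI-ARS, Augmented Random Search, is simple enough to sketch in full. This is a minimal toy version (linear "policy", a quadratic stand-in for the episodic return, and no PI representation learning), not the paper's implementation:

```python
import numpy as np

# Minimal ARS sketch: perturb the policy parameters in random
# directions, evaluate the return on both sides of each perturbation,
# and step along the return differences.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])  # hypothetical optimal parameters

def episodic_return(theta):
    return -np.sum((theta - target) ** 2)  # toy stand-in for a rollout

theta = np.zeros(3)
step, noise, n_dirs = 0.05, 0.1, 8
for _ in range(300):
    deltas = rng.normal(size=(n_dirs, 3))
    diffs = np.array([episodic_return(theta + noise * d)
                      - episodic_return(theta - noise * d) for d in deltas])
    theta += step / (n_dirs * noise) * (diffs[:, None] * deltas).sum(axis=0)
print(theta)
```

Because each of the `2 * n_dirs` rollouts is independent, the inner loop parallelizes trivially, which is the scalability property the abstract highlights; PI-ARS keeps this loop but searches over a much smaller head on top of a learned representation.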
The classical theory of reinforcement learning (RL) has mostly focused on the single-task setting, where an agent learns to solve a task through trial-and-error experience, given access to data only from that task. However, many recent empirical works have demonstrated the practical benefits of leveraging a joint representation trained across multiple related tasks. In this work we theoretically analyze such a setting, formalizing the concept of task relatedness as a shared state-action representation that admits linear dynamics in all the tasks. We introduce the Shared-MatrixRL algorithm for the setting of Multitask MatrixRL. In the presence of $P$ tasks of dimension $d$ sharing a joint $r \ll d$ low-dimensional representation, we show the regret on the $P$ tasks can be improved from $O(PHd\sqrt{NH})$ to $O((Hd\sqrt{rP} + HP\sqrt{rd})\sqrt{NH})$ over $N$ episodes of horizon $H$. These gains coincide with those observed in other linear models in contextual bandits and RL. In contrast with previous work that has studied multitask RL in other function approximation models, we show that in the presence of bilinear optimization oracles and finite state-action spaces there exists a computationally efficient algorithm for multitask MatrixRL via a reduction to quadratic programming. We also develop a simple technique to shave off a $\sqrt{H}$ factor from the regret upper bounds of some episodic linear problems.
In this work, we study the use of the Bellman equation as a surrogate objective for value prediction accuracy. While the Bellman equation is uniquely solved by the true value function over all state-action pairs, we find that the Bellman error (the difference between both sides of the equation) is a poor proxy for the accuracy of the value function. In particular, we show that (1) due to cancellations from both sides of the Bellman equation, the magnitude of the Bellman error is only weakly related to the distance to the true value function, even when considering all state-action pairs, and (2) in the finite data regime, the Bellman equation can be satisfied exactly by infinitely many suboptimal solutions. This means that the Bellman error can be minimized without improving the accuracy of the value function. We demonstrate these phenomena through a series of propositions, illustrative toy examples, and empirical analysis in standard benchmark domains.
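The finite-data failure mode described above is easy to reproduce concretely. A minimal sketch, using a hypothetical 3-state deterministic chain `s0 -> s1 -> s2` (terminal) with reward 1 per step and discount 0.9, where the dataset is missing the transition out of `s1`:

```python
# With limited data, a value function that is far from V* can still
# have exactly zero Bellman error on every observed transition.
gamma = 0.9
true_V = {"s2": 0.0, "s1": 1.0, "s0": 1.0 + gamma * 1.0}

# The dataset only contains (s0, r=1, s1); s1's outgoing transition
# was never logged.
data = [("s0", 1.0, "s1")]

def bellman_error(V):
    return sum((V[s] - (r + gamma * V[s2])) ** 2 for s, r, s2 in data)

# Pick an arbitrarily wrong value for s1, then set V(s0) consistently.
wrong_V = {"s2": 0.0, "s1": 50.0, "s0": 1.0 + gamma * 50.0}
print(bellman_error(wrong_V))              # 0.0 on the observed data
print(abs(wrong_V["s1"] - true_V["s1"]))   # 49.0 away from V*
```

Any choice of `wrong_V["s1"]` works the same way, which is the sense in which infinitely many suboptimal solutions satisfy the Bellman equation on the data.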
We study the problem of model selection in batch policy optimization: given a fixed, partial-feedback dataset and $M$ model classes, learn a policy with performance that is competitive with the policy associated with the best model class. We formalize the problem in the contextual bandit setting with linear model classes by identifying three sources of error that any model selection algorithm should optimally trade off in order to be competitive: (1) approximation error, (2) statistical complexity, and (3) coverage. The first two sources are common in model selection for supervised learning, where optimally trading off these properties is well-studied. In contrast, the third source is unique to batch policy optimization and is due to the dataset shift inherent to the setting. We first show that no batch policy optimization algorithm can simultaneously achieve guarantees for all three, demonstrating a stark contrast between the difficulties of batch policy optimization and the positive results available in supervised learning. Despite this negative result, we show that relaxing any one of the three error sources enables the design of algorithms achieving near-oracle inequalities for the remaining two. We conclude with experiments demonstrating the efficacy of these algorithms.
Reinforcement learning (RL) agents are widely used for solving complex sequential decision-making tasks, but still struggle to generalize to scenarios not seen during training. While prior online approaches demonstrated that using additional signals beyond the reward function can lead to better generalization capabilities in RL agents, i.e. using self-supervised learning (SSL), they struggle in the offline RL setting, i.e. learning from a static dataset. We show that the performance of online algorithms for generalization in RL can be hindered in the offline setting due to poor estimation of similarity between observations. We propose a new theoretically-motivated framework called Generalized Similarity Functions (GSF), which uses contrastive learning to train an offline RL agent to aggregate observations based on the similarity of their expected future behavior, where we quantify this similarity using \emph{generalized value functions}. We show that GSF is general enough to recover existing SSL objectives while also improving zero-shot generalization performance on a complex offline RL benchmark, offline Procgen.
Classical methods for acoustic scene mapping require the estimation of time difference of arrival (TDOA) between microphones. Unfortunately, TDOA estimation is very sensitive to reverberation and additive noise. We introduce an unsupervised data-driven approach that exploits the natural structure of the data. Our method builds upon local conformal autoencoders (LOCA) - an offline deep learning scheme for learning standardized data coordinates from measurements. Our experimental setup includes a microphone array that measures the transmitted sound source at multiple locations across the acoustic enclosure. We demonstrate that LOCA learns a representation that is isometric to the spatial locations of the microphones. The performance of our method is evaluated using a series of realistic simulations and compared with other dimensionality-reduction schemes. We further assess the influence of reverberation on the results of LOCA and show that it demonstrates considerable robustness.